[TRTLLM-4406][feat] LLM sleep & wakeup Part 1: virtual device memory #5034
Conversation
(force-pushed from 74bbfe2 to 065f219)
Cool @tongyuantongyu
But this PR is not about
(force-pushed from 3644b98 to 08e298b)
/bot run
PR_Github #10325 [ run ] triggered by Bot
PR_Github #10325 [ run ] completed with state
/bot run
/bot kill
PR_Github #10416 [ run ] triggered by Bot
PR_Github #10418 [ kill ] triggered by Bot
PR_Github #10416 [ run ] completed with state
PR_Github #10418 [ kill ] completed with state
(force-pushed from cd982b5 to 9feef48)
(force-pushed from 9feef48 to 217b7c0)
/bot run
PR_Github #10455 [ run ] triggered by Bot
PR_Github #10455 [ run ] completed with state
(force-pushed from 379f7f4 to 6610455)
/bot run
PR_Github #10477 [ run ] triggered by Bot
PR_Github #13603 [ run ] triggered by Bot
Signed-off-by: Yuan Tong <[email protected]>
(force-pushed from b738dc0 to ee6676a)
/bot run --disable-fail-fast
PR_Github #13617 [ run ] triggered by Bot
PR_Github #13603 [ run ] completed with state
/bot kill
PR_Github #13629 [ kill ] triggered by Bot
PR_Github #13617 [ run ] completed with state
PR_Github #13629 [ kill ] completed with state
LGTM
Just one detail: PyTorch's use_mem_pool appears to be thread-local. If we used thread-local variables instead of globals for the allocator, our API would align better with theirs.
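As a hedged illustration of the reviewer's suggestion (the names `use_allocator` and `current_allocator` are invented for this sketch, not the PR's actual API), a thread-local allocator context in Python could look like the following. It mirrors how PyTorch's `torch.cuda.use_mem_pool` scopes a memory pool to the calling thread rather than setting a process-wide global:

```python
import threading

# Sketch only: hypothetical names, not the PR's API. The allocator context
# lives in thread-local storage, so entering the context in one thread does
# not affect allocations made concurrently by other threads.
_ctx = threading.local()

class use_allocator:
    """Context manager that installs an allocator for this thread only."""
    def __init__(self, allocator):
        self.allocator = allocator

    def __enter__(self):
        self.prev = getattr(_ctx, "allocator", None)
        _ctx.allocator = self.allocator
        return self.allocator

    def __exit__(self, exc_type, exc, tb):
        _ctx.allocator = self.prev
        return False

def current_allocator():
    """Return the allocator installed for the calling thread, if any."""
    return getattr(_ctx, "allocator", None)

# Demonstration: the context is visible in this thread, invisible in others.
results = {}

def worker():
    results["other_thread"] = current_allocator()

with use_allocator("pool_A"):
    results["this_thread"] = current_allocator()
    t = threading.Thread(target=worker)
    t.start()
    t.join()
```

With a process-wide global instead of `threading.local()`, the worker thread above would observe `"pool_A"` too, which is the mismatch with PyTorch's semantics the reviewer is pointing at.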
But that might still cause issues with
/bot run --disable-fail-fast
PR_Github #13658 [ run ] triggered by Bot
PR_Github #13658 [ run ] completed with state
(force-pushed from 334daa5 to 9210aa1)
/bot run --disable-fail-fast
PR_Github #13784 [ run ] triggered by Bot
PR_Github #13784 [ run ] completed with state
/bot run --disable-fail-fast
PR_Github #13897 [ run ] triggered by Bot
PR_Github #13897 [ run ] completed with state
…VIDIA#5034) Signed-off-by: Yuan Tong <[email protected]> Signed-off-by: Lanyu Liao <[email protected]>
…VIDIA#5034) Signed-off-by: Yuan Tong <[email protected]>
Description
This PR implements virtual device memory that can be released and later reallocated without its memory address changing. This paves the way for supporting LLM sleep and wakeup functionality.
Preliminary tests show that GPU memory usage can be reduced to less than 1 GB during sleep.
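On the device side this is presumably built on CUDA's virtual memory management driver API (`cuMemAddressReserve` / `cuMemMap` / `cuMemUnmap`), which separates address reservation from physical backing so pages can be released and later remapped into the same reserved range. As a hedged, Linux-only CPU analogue (not the PR's implementation), the same "drop physical memory, keep the address" idea can be sketched with `mmap` plus `madvise(MADV_DONTNEED)`:

```python
import ctypes
import mmap

def mapping_address(m: mmap.mmap) -> int:
    """Return the virtual address of the mapping (CPython/ctypes trick)."""
    view = (ctypes.c_char * 1).from_buffer(m)
    addr = ctypes.addressof(view)
    del view  # release the exported buffer so the mapping stays usable
    return addr

length = mmap.PAGESIZE * 4
# Anonymous private mapping: the CPU analogue of an address reservation.
mm = mmap.mmap(-1, length, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

addr_before = mapping_address(mm)
mm[:5] = b"hello"                 # "awake": memory is backed and usable

# "sleep": drop the physical pages; the virtual range remains reserved.
mm.madvise(mmap.MADV_DONTNEED)

addr_after = mapping_address(mm)
assert addr_after == addr_before  # the address did not change
assert mm[:5] == b"\x00" * 5      # pages were released (zero-fill on touch)
```

The payoff, on GPU as here on CPU, is that pointers handed out before "sleep" remain valid after "wakeup", so downstream data structures holding raw addresses need no fix-up.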
Test Coverage
C++: cpp/tests/unit_tests/runtime/virtualMemoryTest.cpp
Python: tests/unittest/_torch/test_virtual_memory.py
GitHub Bot Help
/bot [-h] {run, kill, skip, reuse-pipeline} ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.

run [--disable-fail-fast] [--skip-test] [--stage-list "A10-1, xxx"] [--gpu-type "A30, H100_PCIe"] [--add-multi-gpu-test] [--only-multi-gpu-test] [--disable-multi-gpu-test] [--post-merge] [--extra-stage "H100_PCIe-[Post-Merge]-1, xxx"]

Launch build/test pipelines. All previously running jobs will be killed.

--disable-fail-fast (OPTIONAL): Disable fail-fast on build/test/infra failures.
--skip-test (OPTIONAL): Skip all test stages, but still run build, package, and sanity-check stages. Note: does NOT update GitHub check status.
--stage-list "A10-1, xxx" (OPTIONAL): Only run the specified test stages. Note: does NOT update GitHub check status.
--gpu-type "A30, H100_PCIe" (OPTIONAL): Only run the test stages on the specified GPU types. Note: does NOT update GitHub check status.
--only-multi-gpu-test (OPTIONAL): Only run the multi-GPU tests. Note: does NOT update GitHub check status.
--disable-multi-gpu-test (OPTIONAL): Disable the multi-GPU tests. Note: does NOT update GitHub check status.
--add-multi-gpu-test (OPTIONAL): Force-run the multi-GPU tests. Will also run the L0 pre-merge pipeline.
--post-merge (OPTIONAL): Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
--extra-stage "H100_PCIe-[Post-Merge]-1, xxx" (OPTIONAL): Run the ordinary L0 pre-merge pipeline plus the specified test stages.

kill

Kill all running builds associated with the pull request.

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This also kills all currently running builds associated with the pull request. IMPORTANT NOTE: this is dangerous, since lack of user care and validation can cause the top of tree to break.
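The help text above reads like an argparse-generated usage string. As a hedged sketch (not the bot's actual source code), the same command grammar can be reproduced with Python's argparse, which also documents the parsing behavior implied by the help: required subcommand, boolean flags, and a mandatory --comment for skip:

```python
import argparse

def build_bot_parser() -> argparse.ArgumentParser:
    """Sketch of the /bot command grammar described in the help text."""
    parser = argparse.ArgumentParser(
        prog="/bot",
        description="Interact with a Jenkins server from a PR comment.",
    )
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run", help="Launch build/test pipelines.")
    run.add_argument("--disable-fail-fast", action="store_true")
    run.add_argument("--skip-test", action="store_true")
    run.add_argument("--stage-list")
    run.add_argument("--gpu-type")
    run.add_argument("--add-multi-gpu-test", action="store_true")
    run.add_argument("--only-multi-gpu-test", action="store_true")
    run.add_argument("--disable-multi-gpu-test", action="store_true")
    run.add_argument("--post-merge", action="store_true")
    run.add_argument("--extra-stage")

    sub.add_parser("kill", help="Kill all running builds for the PR.")

    skip = sub.add_parser("skip", help="Skip testing for the latest commit.")
    skip.add_argument("--comment", required=True)

    sub.add_parser("reuse-pipeline", help="Reuse a previous pipeline.")
    return parser

# Example: parse the flags used several times in this PR's conversation.
args = build_bot_parser().parse_args(
    ["run", "--disable-fail-fast", "--stage-list", "A10-1"]
)
```

Note that argparse converts `--disable-fail-fast` to the attribute `disable_fail_fast`, and `--comment` being required means a bare `/bot skip` would be rejected with a usage error.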